Population Training Heuristics
Authors
Abstract
This work describes a new way of employing problem-specific heuristics to improve evolutionary algorithms: the Population Training Heuristic (PTH). The PTH uses heuristics in the fitness definition, guiding the population to settle in search regions where individuals cannot be improved by those heuristics. Several theoretical improvements not present in earlier algorithms are introduced. An application to pattern sequencing problems is examined, and new, improved computational results are reported. The method is also compared against other approaches on benchmark instances taken from the literature.
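As a rough illustration of this idea (a sketch under assumptions, not the authors' implementation), the fragment below penalises individuals that a simple improvement heuristic can still improve, so selection favours regions where the heuristic yields no further gain. The names `objective`, `training_heuristic`, and `pth_fitness`, and the bit-flip heuristic itself, are hypothetical and introduced here only for illustration.

```python
import random

def objective(individual):
    # Hypothetical objective for a minimisation problem: the sum of the genes.
    return sum(individual)

def training_heuristic(individual):
    # Hypothetical improvement heuristic: greedy single bit-flip that keeps
    # the best neighbouring solution found.
    best = list(individual)
    for i in range(len(individual)):
        neighbour = list(individual)
        neighbour[i] = 1 - neighbour[i]
        if objective(neighbour) < objective(best):
            best = neighbour
    return best

def pth_fitness(individual):
    # Penalise individuals the heuristic can still improve: the larger the
    # improvement the heuristic achieves, the worse the fitness, so the
    # population is driven towards regions the heuristic cannot improve.
    g = objective(individual)
    g_improved = objective(training_heuristic(individual))
    improvement = g - g_improved      # >= 0 for a minimisation problem
    return g + improvement            # lower is better

if __name__ == "__main__":
    population = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
    for ind in sorted(population, key=pth_fitness):
        print(ind, pth_fitness(ind))
```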
Similar resources
Automated Training Set Generation for Aortic Valve Classification
Affecting 1% of the population, bicuspid aortic valve (BAV) is the most prevalent anatomical malformation of the heart. Currently, the limited availability of labeled data hinders the development of automated detection methods. This paper presents a new method for efficiently generating training labels for the BAV classification task. We first define heuristic rules based on geometric f...
Exploiting Strong Syntactic Heuristics and Co-Training to Learn Semantic Lexicons
We present a bootstrapping method that uses strong syntactic heuristics to learn semantic lexicons. The three sources of information are appositives, compound nouns, and ISA clauses. We apply heuristics to these syntactic structures, embed them in a bootstrapping architecture, and combine them with co-training. Results on WSJ articles and a pharmaceutical corpus show that this method obtains hi...
A hybrid genetic-greedy approach to the skills management problem
The Naval Surface Warfare Center wishes to create a task assignment schedule with a minimal training cost for workers to raise their skills to the required levels. As the number of workers, skills, and tasks increase, the problem quickly becomes too large to solve through brute force. Already several greedy heuristics have been produced, though their performance degrades for larger data sets. A...
Certifying Some Distributional Robustness with Principled Adversarial Training
Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasse...
Using Graphs of Classifiers to Impose Declarative Constraints on Semi-supervised Learning
We propose a general approach to modeling semi-supervised learning (SSL) algorithms. Specifically, we present a declarative language for modeling both traditional supervised classification tasks and many SSL heuristics, including both well-known heuristics such as co-training and novel domain-specific heuristics. In addition to representing individual SSL heuristics, we show that multiple heurist...